(a MAC project conversation but the issues are related)
I'm working on the design for the database/web applications that will store the data and produce reports. From this side of the problem, things will be much easier if the data types presented and collected in canonical activity elements are abstracted and generalized. That would allow a standard form of processing for the data related to those activity elements, rather than special-casing each kind of element.
Of course, researchers could also do more in-depth analysis on the finer-grained data additionally saved in the logs.
What are canonical activity elements? Challenges, Essay-Response Questions, Multiple-Choice Questions ...
Here are some ideas for a Challenge (a rough sketch of how these might be represented follows the list):
Challenge properties:
Challenge results:
- successful: true, false
- efficiency: high, medium, low
- participation: directed, meandering, random
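To make the abstraction concrete, here is a rough sketch (nothing more than that; the class names, field names, and the hints_used field are my own inventions for illustration, not a committed schema) of a generic result record shared by all canonical activity elements, with the Challenge results above layered on top of it:

```python
from dataclasses import dataclass, field
from enum import Enum


class Efficiency(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


class Participation(Enum):
    DIRECTED = "directed"
    MEANDERING = "meandering"
    RANDOM = "random"


@dataclass
class ElementResult:
    """Generic result record shared by all canonical activity elements."""
    element_type: str                 # "challenge", "essay", "multiple-choice", ...
    element_id: str                   # which element within the activity
    student_id: str
    details: dict = field(default_factory=dict)  # element-specific data kept for drilling down


@dataclass
class ChallengeResult(ElementResult):
    """Challenge results in the standard, aggregatable form listed above."""
    successful: bool = False
    efficiency: Efficiency = Efficiency.MEDIUM
    participation: Participation = Participation.MEANDERING
    hints_used: int = 0               # hints come up later in this thread (Dan, Paul)
```

Reports can then aggregate over the shared and standard fields, while the details field carries whatever finer-grained, element-specific data researchers want to drill into.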
Dan brings up some difficulties with this approach:
At 1:51 PM -0400 5/29/03, Daniel Damelin wrote:
There may be some high level commonalities, for example, was the challenge solved or not, were hints used or not, how many hints. However, the challenges could vary so much that "describing what the student did" can't be standardized if we want to get into details. For example, one of my challenges is to determine a formula for Force given a whole bunch of other "properties" of a ball in a Dynamica view. To solve this the person could guess, add initial forces and/or velocities to the ball, and/or place one or more velocity and/or force boosters in the Dynamica view, and/or adjust their own view to only show certain data to facilitate focusing on appropriate information. For this challenge you might want to know how they solved the puzzle, and that will be very different from another challenge where they need to set a variable or two in a text box. Capturing the interesting information may not be easily done in a standard format, especially if the challenge is more open ended. --Dan
My first question is: what are the objectives (research and learning) and how were you planning on assessing the user interaction?
If you watched 10 different people doing your challenge, could you create categories to characterize the ways they solved the problem? Perhaps those categories could be generally useful.
My encouragement for abstraction and generalization could also be combined with additional activity-element-specific metadata characterizing the specific interaction. This data would not be available in an aggregate form; instead it would be available by "drilling down" to individual data or as a list from multiple individuals.
This would be the same report-display functionality as that used to show essay answers to questions. For essays, aggregation is only practical on whether the student saw the question and whether they wrote any answer at all. Without involved semantic analysis I can't automatically pull additional aggregate stats from this type of data. It may be possible, however, to determine the difference between randomly pounding on the keyboard and actually attempting to write something.
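For what it's worth, a very crude heuristic for that last point might just check whether the answer has word-like structure at all. This is only a sketch of the idea; the function name and thresholds are arbitrary and untested:

```python
import re


def looks_like_an_attempt(answer: str) -> bool:
    """Rough guess at whether an essay answer is a real attempt rather than
    random keyboard pounding. Thresholds are arbitrary placeholders."""
    tokens = re.findall(r"[A-Za-z']+", answer)
    if not tokens:
        return False
    # Real writing tends to be mostly short-to-medium words containing vowels.
    with_vowels = sum(1 for t in tokens if re.search(r"[aeiou]", t, re.IGNORECASE))
    reasonable_length = sum(1 for t in tokens if len(t) <= 12)
    return with_vowels / len(tokens) > 0.7 and reasonable_length / len(tokens) > 0.8
```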
At 11:23 AM 5/29/2003 -0400, Edmund Hazzard wrote:
I asked Stephen about logging a common situation in Dynamica: trying a challenge (hitting a target, going through a maze, etc.). Sometimes you win marbles or balloons, sometimes you can just go on after you succeed. Stephen pointed out that it would be good to have a "standard" way in the log file of describing what the student did with the challenge, one that would be simpler for the researcher or teacher to understand than the step-by-step log of each action. It should have "standard" variables that are given recognizable names. Should we try to write such a standard format? Should it then go into each script (rather than depending on sorting in the database)? Is this worth doing even if it's specific to Dynamica activities? – Ed
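One possible shape for the kind of "standard" entry Ed is describing, sitting alongside (not replacing) the step-by-step action log, might look like the sketch below. Every field name here is an assumption made up for illustration, not a proposed standard:

```python
def standard_challenge_entry(challenge_id, attempts, hints_used, solved, seconds_spent):
    """Summarize a challenge attempt with recognizable, script-independent names."""
    return {
        "element_type": "challenge",
        "challenge_id": challenge_id,   # e.g. "maze-1" or "hit-the-target"
        "attempts": attempts,
        "hints_used": hints_used,
        "solved": solved,
        "seconds_spent": seconds_spent,
    }

# e.g. standard_challenge_entry("maze-1", attempts=3, hints_used=1, solved=True, seconds_spent=240)
```

Whether such entries are written by each script or derived later in the database is exactly the open question Ed raises.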
At 1:17 PM -0400 5/29/03, Paul Horwitz wrote:
This relates closely to the conversation that Barbara and I had with Chris Dede and Ed Dieterle this morning. I think it's an excellent idea – the more we can make the logging "automatic" and meaningful, the easier it will be to create these reports. It would be an interesting intellectual exercise, if nothing else, to go over the activities we've got looking for such challenges and trying to categorize them. In our meeting this morning, we talked about giving each kid a "score" for each activity, but also flagging particular events (e.g., "blowing off" a question – evidenced by not spending enough time on that node to have really read and understood the question) indicative of a "struggling" student. The challenges could contribute to both of these goals: (1) by counting up successfully accomplished challenges (and giving "partial credit" to challenges accomplished after n>0 hints) and (2) by flagging repeated failures to accomplish them. – Paul
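Paul's two goals could be sketched roughly like this; the weights, thresholds, and field names below are invented purely to make the idea concrete, and assume entries shaped like the sketch above:

```python
def activity_score(challenges):
    """Count successfully accomplished challenges, with partial credit when hints were used."""
    score = 0.0
    for c in challenges:
        if c["solved"]:
            score += 1.0 / (1 + c["hints_used"])   # full credit with no hints, less with each hint
    return score


def struggle_flags(challenges, questions, min_read_seconds=10, max_failures=3):
    """Flag events suggestive of a struggling student."""
    flags = []
    for q in questions:
        # "Blowing off" a question: too little time on the node to have read and understood it.
        if q["seconds_spent"] < min_read_seconds:
            flags.append(("blew_off_question", q["question_id"]))
    for c in challenges:
        # Repeated failures to accomplish a challenge.
        if not c["solved"] and c["attempts"] >= max_failures:
            flags.append(("repeated_failure", c["challenge_id"]))
    return flags
```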